    Reinforcement learning or active inference?

    This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and their sampling of the environment to minimise their free energy. Such agents learn the causal structure of their environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notions of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming, namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof of concept may be important because the free-energy formulation furnishes a unified account of both action and perception, and may speak to a reappraisal of the role of dopamine in the brain.
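    To make the free-energy formulation concrete, the following is a minimal sketch of an agent that descends a Gaussian free energy with respect to both its internal state (perception) and its action. The linear generative model, learning rates, and toy world dynamics are illustrative assumptions, not the mountain-car scheme used in the paper.

```python
# A minimal sketch of free-energy minimisation, assuming a toy linear
# generative model; all names and parameters here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def g(mu):
    """Generative mapping: predicted sensation for internal state mu."""
    return mu  # identity observation model, for simplicity

def free_energy(y, mu, prior_mu, sigma_y=1.0, sigma_p=1.0):
    """Gaussian free energy: sum of precision-weighted prediction errors."""
    return 0.5 * ((y - g(mu)) ** 2 / sigma_y + (mu - prior_mu) ** 2 / sigma_p)

# World: hidden state x decays toward 0 and is pushed by the action a.
x, mu, a = 2.0, 0.0, 0.0
prior_mu = 0.0          # the agent 'expects' to sense the prior state
lr_mu, lr_a = 0.1, 0.1  # gradient-descent rates for perception and action

for t in range(200):
    y = x + 0.05 * rng.standard_normal()    # noisy sensation
    # Perception: descend free energy with respect to the internal state mu
    # (dF/dmu = (g(mu) - y) + (mu - prior_mu) for unit precisions).
    mu -= lr_mu * ((g(mu) - y) + (mu - prior_mu))
    # Action: change the world so sensations match predictions
    # (assuming dy/da = 1, an illustrative simplification).
    a -= lr_a * (y - g(mu))
    x += 0.1 * (-x + a)                      # world dynamics

print(f"final state x={x:.3f}, belief mu={mu:.3f}")
```

    Because action only enters through the sensations it produces, the same prediction-error signal drives both inference and behaviour, which is the sense in which no separate reward or value function is needed.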

    The global dynamical complexity of the human brain network

    How much information do large brain networks integrate as a whole, over and above the sum of their parts? Can the dynamical complexity of such networks be globally quantified in an information-theoretic way and be meaningfully coupled to brain function? Recently, measures of dynamical complexity such as integrated information have been proposed. However, problems related to normalization, and to the Bell number of partitions associated with these measures, make these approaches computationally infeasible for large-scale brain networks. Our goal in this work is to address this problem. Our formulation of network integrated information is based on the Kullback-Leibler divergence between the multivariate distribution on the set of network states and the corresponding factorized distribution over its parts. We find that adopting the maximum information partition keeps these computations tractable. These methods are well suited to large networks with linear stochastic dynamics. We compute the integrated information both for the system’s attractor states and for non-stationary dynamical states of the network. We then apply this formalism to compute the integrated information of the human brain’s connectome. Compared to a randomly rewired network, we find that the specific topology of the brain generates greater information complexity. This work has been supported by the European Research Council’s CDAC project: “The Role of Consciousness in Adaptive Behavior: A Combined Empirical, Computational and Robot based Approach” (ERC-2013-ADG 341196).
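    For linear stochastic dynamics the stationary distribution is Gaussian, and the Kullback-Leibler divergence between the joint distribution and its factorization over parts reduces to a difference of log-determinants of covariance matrices. The sketch below computes this quantity at the attractor (stationary) state for a fixed bipartition; the search over the maximum information partition is omitted, and the dynamics matrix and noise covariance are illustrative assumptions rather than connectome data.

```python
# A hedged sketch of the KL-based integrated-information measure for a
# stable linear stochastic (Gaussian) network, evaluated at the stationary
# state and for a fixed bipartition of the nodes.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)

n = 8
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stable linear dynamics
Q = np.eye(n)                                       # noise covariance

# Stationary covariance Sigma solves the Lyapunov equation
# A Sigma + Sigma A^T + Q = 0.
Sigma = solve_continuous_lyapunov(A, -Q)

def kl_to_factorized(Sigma, parts):
    """KL( N(0, Sigma) || prod_k N(0, Sigma_kk) ) for a given partition.

    For zero-mean Gaussians this is 0.5 * (sum of part log-dets
    minus the full log-det), i.e. the total correlation across parts.
    """
    _, logdet_full = np.linalg.slogdet(Sigma)
    logdet_parts = sum(np.linalg.slogdet(Sigma[np.ix_(p, p)])[1] for p in parts)
    return 0.5 * (logdet_parts - logdet_full)

parts = [list(range(n // 2)), list(range(n // 2, n))]  # fixed bipartition
print("integrated information (nats):", kl_to_factorized(Sigma, parts))
```

    Comparing this value against the same computation on a randomly rewired coupling matrix A would mirror, in miniature, the paper’s comparison between the connectome and its randomized control.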

    Dynamical features of higher-order correlation events: impact on cortical cells

    Cortical neurons receive signals from thousands of other neurons. The statistical properties of these input spike trains substantially shape each neuron’s output response properties. Experimental and theoretical investigations have mostly focused on the second-order statistical features of the input spike trains (mean firing rates and pairwise correlations). Little is known about how higher-order correlations affect the integration and firing behavior of a cell independently of the second-order statistics. To address this issue, we simulated the dynamics of a population of 5000 neurons, controlling both their second-order and higher-order correlation properties to reflect physiological data. We then used these ensemble dynamics as the input stage to morphologically reconstructed cortical cells (a layer 5 pyramidal cell and a layer 4 spiny stellate cell) and to an integrate-and-fire neuron. Our results show that changes made solely to the higher-order correlation properties of the network’s dynamics significantly affect the response properties of a target neuron, both in terms of output rate and spike timing. Moreover, the morphology and voltage-dependent mechanisms of the target neuron considerably modulate the quantitative aspects of these effects. Finally, we show how these results affect the sparseness of neuronal representations, tuning properties, and feature selectivity of cortical cells.
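    As an illustration of how higher-order input correlations can shape a target neuron’s output, the sketch below drives a leaky integrate-and-fire neuron with a multiple-interaction process: spikes of a mother Poisson train are copied to a random subset of inputs, and the copy probability sets the strength of higher-order synchrony. This generator and all parameters are assumptions for illustration, not the paper’s 5000-neuron ensemble or reconstructed morphologies.

```python
# A minimal sketch: higher-order correlated input (multiple-interaction
# process) driving a leaky integrate-and-fire (LIF) neuron. All parameters
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

n_inputs, p_copy = 500, 0.05   # copy probability controls synchrony strength
dt, T = 1e-4, 2.0              # 0.1 ms steps, 2 s of simulated time
steps = int(T / dt)
rate_mother = 400.0            # Hz; each input fires ~ rate_mother * p_copy

# Each mother spike is shared by a binomial subset of the inputs, producing
# synchronous events involving many neurons (higher-order correlations).
mother = rng.random(steps) < rate_mother * dt
n_sync = rng.binomial(n_inputs, p_copy, size=steps) * mother  # spikes/step

# LIF neuron driven by the pooled excitatory input.
tau_m = 0.02                                   # membrane time constant (s)
v_rest, v_thresh, v_reset = -70e-3, -54e-3, -70e-3
w = 0.2e-3                                     # 0.2 mV jump per input spike
v, out_spikes = v_rest, 0
for t in range(steps):
    v += dt / tau_m * (v_rest - v) + w * n_sync[t]
    if v >= v_thresh:
        v = v_reset
        out_spikes += 1

print(f"output rate: {out_spikes / T:.1f} Hz")
```

    Varying p_copy while rescaling rate_mother to hold each input’s mean rate and pairwise correlation fixed would isolate the effect of the higher-order structure alone, in the spirit of the manipulation the abstract describes.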